Several types of dependencies between rules have been proposed for the static analysis of existential rule ontologies, promising insights about computational properties and possible practical uses of a given set of rules, e.g., in ontology-based query answering. Unfortunately, these dependencies are rarely implemented, so their potential is hardly realized in practice. We focus on two kinds of rule dependencies, positive reliances and restraints, and design and implement optimized algorithms for their efficient computation. Experiments on real-world ontologies of up to more than 100,000 rules show the scalability of our approach, which lets us realize several previously proposed applications as practical case studies. In particular, we analyze to what extent rule-based bottom-up approaches to reasoning can be guaranteed to yield redundancy-free "lean" knowledge graphs (so-called cores) on practical ontologies.
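As a rough illustration of what such a dependency check involves, the sketch below implements only the cheap predicate-overlap pre-filter behind positive reliance: rule r2 can positively rely on rule r1 only if r1 can derive an atom whose predicate occurs in r2's body. The Atom/Rule structures are hypothetical; the actual algorithms must additionally handle unification and the nulls introduced by existential variables.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    predicate: str
    arity: int

@dataclass(frozen=True)
class Rule:
    body: tuple  # tuple of Atom
    head: tuple  # tuple of Atom

def may_positively_rely(r2: Rule, r1: Rule) -> bool:
    """Necessary condition only: r2 can positively rely on r1 only if
    some head atom of r1 shares predicate and arity with a body atom
    of r2. The full check also requires unification of the atoms and
    careful treatment of nulls from existential variables."""
    derived = {(a.predicate, a.arity) for a in r1.head}
    return any((a.predicate, a.arity) in derived for a in r2.body)
```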
Graph convolutional neural networks have shown significant potential in natural and histopathology images. However, their use has only been studied at a single magnification or in multi-magnification settings with late fusion. In order to leverage multi-magnification information and early fusion with graph convolutional networks, we handle the different embedding spaces at each magnification by introducing the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method. We model histopathology image patches and their relations with neighboring patches and with patches at other scales (i.e., magnifications) as a graph. To pass information between the different magnification embedding spaces, we define separate message-passing neural networks based on node and edge type. We experiment on prostate cancer histopathology images to predict grade groups based on features extracted from patches. We also compare our MS-RGCN with multiple state-of-the-art methods, with evaluations on both source and held-out datasets. Our method outperforms the state-of-the-art on both datasets, especially on the classification of grade groups 2 and 3, which are significant for clinical decisions in patient management. Through an ablation study, we test and show the value of the pertinent design features of MS-RGCN.
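A minimal sketch of the type-aware message-passing idea, using torch_geometric's RGCNConv, which keeps a separate weight matrix per edge type. The number of relations, feature sizes, and mean pooling are illustrative assumptions, not the authors' exact architecture.

```python
import torch
from torch_geometric.nn import RGCNConv, global_mean_pool

NUM_RELATIONS = 3  # e.g., same-scale neighbor edges at two magnifications
                   # plus cross-scale edges (assumed edge typing)

class MSRGCNSketch(torch.nn.Module):
    def __init__(self, in_dim=512, hidden=128, num_classes=5):
        super().__init__()
        # One weight matrix per edge type approximates "separate
        # message-passing networks based on node and edge type".
        self.conv1 = RGCNConv(in_dim, hidden, num_relations=NUM_RELATIONS)
        self.conv2 = RGCNConv(hidden, hidden, num_relations=NUM_RELATIONS)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, edge_type, batch):
        h = torch.relu(self.conv1(x, edge_index, edge_type))
        h = torch.relu(self.conv2(h, edge_index, edge_type))
        # Multiple instance learning: pool patch-node embeddings into a
        # single slide-level embedding before grade-group classification.
        return self.head(global_mean_pool(h, batch))
```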
Online media data, in the form of images and videos, have become a mainstream communication channel. However, recent advances in deep learning, particularly deep generative models, open the door to producing perceptually convincing images and videos at low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This has motivated growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches, and discuss the challenges and trends in this field for future research.
Binary Neural Networks (BNNs) are showing tremendous success on realistic image classification tasks. Notably, their accuracy is similar to the state-of-the-art accuracy obtained by full-precision models tailored to edge devices. In this regard, BNNs are very amenable to edge devices since they employ 1 bit to store the inputs and weights, and thus, their storage requirements are low. Also, BNN computations are mainly done using XNOR and pop-count operations, which are implemented very efficiently using simple hardware structures. Nonetheless, supporting BNNs efficiently on mobile CPUs is far from trivial, since their benefits are hindered by frequent memory accesses to load weights and inputs. In BNNs, a weight or an input is stored using one bit and, to increase storage and computation efficiency, several of them are packed together as a sequence of bits. In this work, we observe that the number of unique sequences representing a set of weights is typically low. Also, we have seen that during the evaluation of a BNN layer, a small group of unique sequences is employed more frequently than others. Accordingly, we propose exploiting this observation by using Huffman encoding to encode the bit sequences and then using an indirection table to decode them during the BNN evaluation. Also, we propose a clustering scheme to identify the most common sequences of bits and replace the less common ones with similar common sequences. Hence, we decrease the storage requirements and memory accesses, since common sequences are encoded with fewer bits. We extend a mobile CPU by adding a small hardware structure that can efficiently cache and decode the compressed sequences of bits. We evaluate our scheme using the ReActNet model with the ImageNet dataset. Our experimental results show that our technique can reduce memory requirements by 1.32x and improve performance by 1.35x.
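A toy sketch of the software side of this idea: pack binarized weights into fixed-width bit sequences, Huffman-encode the sequences by frequency so common ones cost fewer bits, and keep a code-to-sequence indirection table for decoding. The 8-bit width and the toy weights are illustrative assumptions; the paper's clustering scheme and hardware decoder are not modeled.

```python
import heapq
from collections import Counter

def pack(bits, width=8):
    """Group a flat list of {0,1} weights into width-bit sequences."""
    return [tuple(bits[i:i + width]) for i in range(0, len(bits), width)]

def huffman_code(sequences):
    """Assign shorter codes to more frequent bit sequences."""
    heap = [[freq, [seq, ""]] for seq, freq in Counter(sequences).items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single unique sequence
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {seq: code for seq, code in heap[0][1:]}

weights = [0, 1, 1, 0, 0, 1, 1, 0] * 40 + [1, 0] * 16  # toy binarized weights
codes = huffman_code(pack(weights))
decode_table = {code: seq for seq, code in codes.items()}  # indirection table
```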
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
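A minimal usage sketch of the released checkpoints through the Hugging Face transformers library; the smaller bloom-560m variant is assumed here, since loading the full 176B model requires multi-GPU or offloaded setups.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # assumed smaller released checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("BLOOM is a 176B-parameter open-access language model that",
             return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```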
Survival modeling in healthcare relies on explainable statistical models; yet, their underlying assumptions are often simplistic and, thus, unrealistic. Machine learning models can estimate more complex relationships and lead to more accurate predictions, but are non-interpretable. This study shows it is possible to estimate hospitalization risk for congestive heart failure from a 30-second single-lead electrocardiogram signal. Using a machine learning approach not only results in greater predictive power but also provides clinically meaningful interpretations. We train an eXtreme Gradient Boosting accelerated failure time model and exploit SHapley Additive exPlanations values to explain the effect of each feature on predictions. Our model achieved a concordance index of 0.828 and an area under the curve of 0.853 at one year and 0.858 at two years on a held-out test set of 6,573 patients. These results show that a rapid test based on an electrocardiogram could be crucial in targeting and treating high-risk individuals.
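A compact sketch of the modeling pipeline described above: an XGBoost accelerated failure time (AFT) model with SHAP explanations. The synthetic data, loss distribution, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import xgboost as xgb
import shap

# Synthetic stand-in data: in the study these would be features derived
# from 30-second single-lead ECGs, with time-to-hospitalization labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
time = rng.exponential(2.0, size=200)   # observed follow-up times
censored = rng.random(200) < 0.3        # True = event not observed

# AFT labels are interval bounds: equal bounds for observed events,
# an infinite upper bound for right-censored patients.
dtrain = xgb.DMatrix(X)
dtrain.set_float_info("label_lower_bound", time)
dtrain.set_float_info("label_upper_bound", np.where(censored, np.inf, time))

params = {
    "objective": "survival:aft",        # accelerated failure time
    "aft_loss_distribution": "normal",  # assumed; logistic/extreme also valid
    "aft_loss_distribution_scale": 1.0,
    "eval_metric": "aft-nloglik",
}
booster = xgb.train(params, dtrain, num_boost_round=100)

# SHAP values attribute the predicted (log) survival time to each feature,
# giving the per-patient explanations described above.
explainer = shap.TreeExplainer(booster)
shap_values = explainer.shap_values(X)
```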
Bayesian optimization (BO) is one of the most effective methods for closed-loop experimental design and black-box optimization. However, a key limitation of BO is that it is an inherently sequential algorithm (one experiment is proposed per round) and thus cannot directly exploit high-throughput (parallel) experiments. Diverse modifications of the BO framework have been proposed in the literature to enable the exploitation of parallel experiments, but such approaches are limited in the degree of parallelization they can achieve and can lead to redundant experiments (thus wasting resources and potentially compromising performance). In this work, we present new parallel BO paradigms that exploit the structure of the system to partition the design space. Specifically, we propose an approach that partitions the design space by following the level sets of the performance function, and an approach that exploits partially separable structure of the performance function. We conduct extensive numerical experiments using a reactor case study to benchmark the effectiveness of these approaches against a variety of state-of-the-art parallel algorithms reported in the literature. Our computational results show that our approaches significantly reduce the required search time and increase the probability of finding a global (rather than local) solution.
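A minimal sketch of the level-set partitioning idea under stated assumptions: fit a Gaussian-process surrogate, split a candidate set into quantile bands of the predicted mean (a stand-in for level sets of the performance function), and propose one experiment per band to run in parallel. The band count and the purely exploratory within-band rule are illustrative choices, not the paper's acquisition strategy.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def propose_parallel(X_obs, y_obs, candidates, n_bands=4):
    """Propose one experiment per level-set band of the surrogate mean."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Quantile bands of the predicted mean approximate the level sets
    # of the performance function.
    edges = np.quantile(mu, np.linspace(0.0, 1.0, n_bands + 1))
    proposals = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.flatnonzero((mu >= lo) & (mu <= hi))
        if idx.size:
            # Most uncertain point in the band (purely exploratory rule).
            proposals.append(candidates[idx[np.argmax(sigma[idx])]])
    return np.array(proposals)  # evaluated in parallel, one per band
```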
We study the influence of different loss functions on lesion segmentation from medical images. Although the cross-entropy (CE) loss is the most popular option when dealing with natural images, for biomedical image segmentation the soft Dice loss is often preferred due to its ability to handle imbalanced scenarios. On the other hand, the combination of both functions has also been successfully applied to these tasks. A much less studied problem is the generalization ability of all these losses in the presence of out-of-distribution (OOD) data, i.e., samples appearing at test time that are drawn from a different distribution than the training images. In our case, we train models on images that always contain lesions, but at test time we also have lesion-free samples. Through comprehensive experiments on lesion segmentation from endoscopic images and ulcer segmentation from diabetic foot images, we analyze the impact of minimizing the different loss functions on in-distribution performance, as well as their ability to generalize to OOD data. Our findings are surprising: CE-Dice loss combinations that excel at segmenting in-distribution images perform poorly when dealing with OOD data, which leads us to recommend adopting the CE loss for this kind of problem, thanks to its robustness and ability to generalize to OOD samples. The code associated with our experiments can be found at https://github.com/agaldran/lesion_losses_ood.
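A minimal PyTorch sketch of the losses under comparison, for the binary segmentation case: the soft Dice loss and a CE-Dice combination with a weighting factor alpha, which is an illustrative assumption rather than a tuned value from the paper.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary masks of shape (N, 1, H, W)."""
    probs = torch.sigmoid(logits)
    num = 2.0 * (probs * target).sum(dim=(1, 2, 3)) + eps
    den = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) + eps
    return 1.0 - (num / den).mean()

def ce_dice_loss(logits, target, alpha=0.5):
    """Convex combination of cross-entropy and soft Dice."""
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return alpha * ce + (1.0 - alpha) * soft_dice_loss(logits, target)
```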
Diabetic retinopathy (DR) is one of the leading causes of blindness in the working-age population of developed countries, caused by a side effect of diabetes that reduces the blood supply to the retina. Deep neural networks have been widely used in automated systems for DR classification from fundus images. However, these models require a large number of annotated images. In the medical domain, annotations by experts are costly, tedious and time-consuming; as a result, only a limited number of annotated images are available. This paper presents a semi-supervised method that leverages unlabeled images alongside labeled images to train a model for detecting diabetic retinopathy. The proposed method uses unsupervised pretraining via self-supervised learning, followed by supervised fine-tuning with a small set of labeled images and knowledge distillation, to improve performance on the classification task. This method was evaluated on the EyePACS test set and the Messidor-2 dataset, achieving 0.94 and 0.89 AUC respectively, using only 2% of the EyePACS train labeled images.
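A minimal sketch of the knowledge distillation step mentioned above, following the standard soft-target formulation; the temperature and loss weighting are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation plus a hard-label cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```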
Text-VQA aims at answering questions that require understanding the textual cues in an image. Despite the great progress of existing Text-VQA methods, their performance suffers from an insufficient amount of human-labeled question-answer (QA) pairs. However, we observe that the scene text is generally not fully exploited in existing datasets: only a small portion of the text in each image participates in the annotated QA activities. This results in a large amount of useful information going to waste. To address this deficiency, we develop a new method to generate high-quality and diverse QA pairs by explicitly utilizing the existing text available in the scene context of each image. Specifically, we propose TAG, a text-aware visual question-answer generation architecture that learns to produce meaningful and accurate QA samples using a multimodal transformer. The architecture exploits underexplored scene text information and enhances the scene understanding of Text-VQA models by combining the generated QA pairs with the initial training data. Extensive experimental results on two well-known Text-VQA benchmarks (TextVQA and ST-VQA) demonstrate that our proposed TAG effectively enlarges the training data and helps improve Text-VQA performance without extra labeling effort. Moreover, our model outperforms state-of-the-art approaches that are pre-trained with extra large-scale data. Code will be made publicly available.